<!-- wp:heading {"textAlign":"center"} -->

<h2 class="has-text-align-center"><strong>The Art of Stable Diffusion: The Basics of AI Image Generation</strong></h2>

<!-- /wp:heading -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center">Original article: <a href="https://jalammar.github.io/illustrated-stable-diffusion/">https://jalammar.github.io/illustrated-stable-diffusion/</a></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center">Translated by: Lin Jun, Strategic Information Department (963963)</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>Image generation with artificial intelligence is AI's newest capability, and it is breathtaking. Creating striking visuals from text descriptions feels like magic, and it clearly signals a shift in how humans create art. The release of Stable Diffusion is a notable milestone in this development, because it put a high-performance model (high in terms of image quality and speed, and relatively low in compute and memory requirements) in the hands of the general public.</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>After experimenting with AI image generation, you may start to wonder how it works.</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>What follows is a gentle introduction to how Stable Diffusion works.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":29,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-1024x440.png" alt="" class="wp-image-29"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 1</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>Stable Diffusion is versatile in many ways. Let's first look at generating images from text alone (text2img). Figure 1 shows an example text input and the resulting generated image (the full prompt is on the left of the figure). Beyond text-to-image, Figure 2 shows the other main way of using it: having it modify an image (so the input is text + image).</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":51,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-1-1024x351.png" alt="" class="wp-image-51"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 2</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>Let's start digging deeper, since that helps explain the components, how they interact, and what the image generation options/parameters mean.</p>

<!-- /wp:paragraph -->


<!-- wp:heading -->

<h2><strong>The Components of Stable Diffusion</strong></h2>

<!-- /wp:heading -->


<!-- wp:paragraph -->

<p>Stable Diffusion is a system made up of several components and models, not a single monolithic model.</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>As we look under the hood, the first observation we can make is that there is a text-understanding component that translates the text information into a numeric representation capturing the ideas in the text.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":52,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-2-1024x424.png" alt="" class="wp-image-52"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 3</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>We are starting with a high-level view here, and we will get into more machine-learning details later. For now, it is enough to say that this text encoder is a special Transformer language model (technically: the text encoder of a CLIP model). It takes the input text and outputs a list of numbers representing each word/token in the text (one vector per token).</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>That information is then presented to the image generator, which is itself composed of several components.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":53,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-3-1024x413.png" alt="" class="wp-image-53"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 4</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>The image generator goes through two stages:</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>1. <strong>Image information creator</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>This component is the secret sauce of Stable Diffusion. It is where much of the performance gain over previous models is achieved.</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>This component runs for multiple steps to generate image information. This is the "steps" parameter in Stable Diffusion interfaces and libraries, which often defaults to 50 or 100.</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>The image information creator works entirely in the image information space (also called the latent space). We will discuss what that means in more detail later. This property makes it faster than previous diffusion models that worked in pixel space. Technically, this component is made up of a UNet neural network and a scheduling algorithm.</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>“Diffusion”一词描述了该组件中发生的情况。正是信息的步骤化处理,最终生成高质量的图像(由下一个组件,图像解码器生成)。</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":54,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-4-1024x425.png" alt="" class="wp-image-54"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 5</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>2. <strong>Image decoder</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>The image decoder paints a picture from the information it receives from the image information creator. It runs only once, at the end of the process, to produce the final pixel image.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":55,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-5-1024x397.png" alt="" class="wp-image-55"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 6</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>With that, let's look at the three main components of Stable Diffusion (each with its own neural network):</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p><strong>ClipText</strong> for text encoding.</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>Input: text.</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>Output: 77 token embedding vectors, each with 768 dimensions.</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p><strong>UNet + Scheduler</strong> to gradually process/diffuse information in the information (latent) space.</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>Input: text embeddings and a starting multi-dimensional array made up of noise (a structured list of numbers, also called a tensor).</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>Output: a processed information array.</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p><strong>Autoencoder decoder</strong> that paints the final image using the processed information array.</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>Input: the processed information array (dimensions: (4, 64, 64)).</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>Output: the final image (dimensions: (3, 512, 512), i.e. (red/green/blue, width, height)).</p>

<!-- /wp:paragraph -->
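The three-component data flow above can be sketched with placeholder arrays. The stub functions below are hypothetical stand-ins (not the real networks); only the tensor shapes match the numbers given in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def text_encoder(prompt):
    # Stand-in for ClipText: 77 token embeddings, 768 dimensions each.
    return rng.standard_normal((77, 768))

def unet_scheduler(text_emb, latents, steps=50):
    # Stand-in for UNet + scheduler: refines the latent array step by step.
    for _ in range(steps):
        latents = latents - 0.01 * rng.standard_normal(latents.shape)
    return latents

def image_decoder(latents):
    # Stand-in for the autoencoder decoder: (4, 64, 64) -> (3, 512, 512).
    return latents[:3].repeat(8, axis=1).repeat(8, axis=2)

text_emb = text_encoder("paradise cosmic beach")
latents = rng.standard_normal((4, 64, 64))   # random starting noise
latents = unet_scheduler(text_emb, latents)
image = image_decoder(latents)

print(text_emb.shape, latents.shape, image.shape)
# (77, 768) (4, 64, 64) (3, 512, 512)
```

Only the shapes are meaningful here; the point is that the text encoder, the UNet + scheduler, and the decoder hand each other arrays of exactly these dimensions.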


<!-- wp:image {"align":"center","id":56,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-6-1024x393.png" alt="" class="wp-image-56"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 7</strong></p>

<!-- /wp:paragraph -->


<!-- wp:heading -->

<h2><strong>What Is Diffusion, Anyway?</strong></h2>

<!-- /wp:heading -->


<!-- wp:paragraph -->

<p>Diffusion is the process that takes place inside the pink "image information creator" component. Given the token embeddings that represent the input text and a random starting array of image information (also called latents), the process produces an information array that the image decoder uses to paint the final image.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":57,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-7-1024x533.png" alt="" class="wp-image-57"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 8</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>This process happens step by step, with each step adding more relevant information. To get an intuition for the process, we can inspect the random latents array and see that it translates into visual noise. In this case, the visual inspection is done by passing it through the image decoder.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":58,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-8-1024x462.png" alt="" class="wp-image-58"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 9</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>Diffusion happens over multiple steps, each step operating on an input latents array and producing another latents array that better resembles the input text and all the visual information the model absorbed from the images it was trained on.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":59,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-9-1024x549.png" alt="" class="wp-image-59"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 10</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>We can visualize a set of these latents to see what information gets added at each step.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":60,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-2-1-1024x536.png" alt="" class="wp-image-60"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 11</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>The process is quite breathtaking to watch.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":61,"sizeSlug":"full","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-full"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image.gif" alt="" class="wp-image-61"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 12</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>Something especially fascinating happens between steps 2 and 4 in this case. It is as if the outline emerges from the noise.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":62,"sizeSlug":"full","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-full"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-1.gif" alt="" class="wp-image-62"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 13</strong></p>

<!-- /wp:paragraph -->


<!-- wp:heading -->

<h2><strong>How Diffusion Works</strong></h2>

<!-- /wp:heading -->


<!-- wp:paragraph -->

<p>The central idea of generating images with diffusion models relies on the fact that we have powerful computer-vision models. Given a large enough dataset, these models can learn complex operations. Diffusion models approach image generation by framing the problem as follows:</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>Say we have an image. We generate some noise and add it to the image.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":63,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-10-1024x560.png" alt="" class="wp-image-63"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 14</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>This can be treated as a training example. The same formula can be used to create lots of training examples to train the central component of our image generation model.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":64,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-11-1024x562.png" alt="" class="wp-image-64"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 15</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>This example shows a few noise amount values, from no noise at all (amount 0) to full noise (amount 4), but we can easily control how much noise is added to each image. So we can spread the process over dozens of steps, creating dozens of training examples for every image in the training dataset.</p>

<!-- /wp:paragraph -->
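The example-creation process described above can be sketched as follows. A simple linear mixing of image and noise is assumed here for illustration; real diffusion models use a carefully chosen variance schedule:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_examples(image, num_levels=4):
    """Create (noisy_image, noise_amount, noise) triples from one clean image."""
    examples = []
    for level in range(1, num_levels + 1):
        t = level / num_levels                  # fraction of noise, 0 < t <= 1
        noise = rng.standard_normal(image.shape)
        noisy = (1 - t) * image + t * noise     # blend the image with noise
        examples.append((noisy, level, noise))  # the noise is the training target
    return examples

image = rng.uniform(0, 1, size=(3, 64, 64))     # a toy "clean" image
examples = make_training_examples(image)
print(len(examples))  # 4 examples from a single image
```

At the highest noise amount the "image" is pure noise, which matches the amount-4 column in the figure above.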


<!-- wp:image {"align":"center","id":65,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-12-1024x588.png" alt="" class="wp-image-65"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 16</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>With this dataset, we can train a noise predictor, and end up with a great noise predictor that actually creates images when run in a certain configuration. A training step should look familiar if you have had any exposure to machine learning:</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":66,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-13-1024x582.png" alt="" class="wp-image-66"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 17</strong></p>

<!-- /wp:paragraph -->
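The training step in Figure 17 (predict the noise, compare it with the actual noise) can be sketched as follows. The noise predictor here is a hypothetical stub (a real one is a trained UNet), and the gradient update is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_predictor(noisy_image, noise_amount):
    # Hypothetical stub: a real predictor would be a trained neural network.
    return np.zeros_like(noisy_image)

# One training example: a noisy image, its noise amount, and the true noise.
true_noise = rng.standard_normal((3, 64, 64))
clean = rng.uniform(0, 1, size=(3, 64, 64))
noisy = 0.5 * clean + 0.5 * true_noise

predicted = noise_predictor(noisy, 0.5)
loss = np.mean((predicted - true_noise) ** 2)  # MSE vs. the actual noise
print(float(loss))
```

The loss would then drive a parameter update of the predictor, exactly as in any supervised training loop.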


<!-- wp:paragraph -->

<p>Now let's see how this can generate images.</p>

<!-- /wp:paragraph -->


<!-- wp:heading -->

<h2><strong>Painting Images by Removing Noise</strong></h2>

<!-- /wp:heading -->


<!-- wp:paragraph -->

<p>The trained noise predictor can take a noisy image and the number of the denoising step, and predict a slice of noise.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":67,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-14-1024x632.png" alt="" class="wp-image-67"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 18</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>The sampled noise is predicted so that if we subtract it from the image, we get an image closer to the images the model was trained on (not the exact images themselves, but their distribution: a world where the sky is usually blue, people have two eyes, and cats have a particular look, with pointy ears and a clearly unimpressed expression).</p>

<!-- /wp:paragraph -->
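The generation loop this describes (repeatedly predict noise and subtract it) can be sketched as below. The update rule is a deliberate simplification of the real DDPM update, and the predictor is a stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_noise(x, step):
    # Hypothetical stand-in for the trained noise predictor.
    return 0.5 * x

def generate(shape=(4, 64, 64), steps=50):
    x = rng.standard_normal(shape)          # start from pure noise
    for step in reversed(range(steps)):
        noise_estimate = predict_noise(x, step)
        x = x - noise_estimate / steps      # remove a slice of predicted noise
    return x

latents = generate()
print(latents.shape)  # (4, 64, 64)
```

With a real predictor, each subtraction nudges the array toward the training distribution; the final latents are then handed to the image decoder.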


<!-- wp:image {"align":"center","id":68,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-15-1024x567.png" alt="" class="wp-image-68"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 19</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>If the training dataset consists of aesthetically pleasing images (e.g., LAION Aesthetics, which Stable Diffusion was trained on), the resulting images tend to be aesthetically pleasing as well. If we trained it on logo images, we would end up with a logo-generating model.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":69,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-16-1024x560.png" alt="" class="wp-image-69"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 20</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>This is how diffusion models generate images, as described in "Denoising Diffusion Probabilistic Models". By now you have an intuition of diffusion, which is the main component not only of Stable Diffusion, but also of DALL-E 2 and Google's Imagen.</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>Note that the diffusion process described so far generates images without using any text data. So if we deploy this model, it will generate great-looking images, but we have no way of controlling whether it produces a pyramid, a cat, or anything else. In the next section, we describe how text is incorporated into the process in order to control what type of image the model generates.</p>

<!-- /wp:paragraph -->


<!-- wp:heading -->

<h2><strong>Speed Boost: Diffusion on Compressed (Latent) Data Instead of the Pixel Image</strong></h2>

<!-- /wp:heading -->


<!-- wp:paragraph -->

<p>To speed up the image generation process, the Stable Diffusion paper runs the diffusion process not on the pixel images themselves, but on a compressed version of the image. The paper calls this "Departure to Latent Space".</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>This compression (and later decompression/painting) is done via an autoencoder. The autoencoder compresses the image into the latent space using its encoder, then reconstructs it using only the compressed information with its decoder.</p>

<!-- /wp:paragraph -->
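The 8x spatial compression involved can be illustrated with a toy "encoder"/"decoder" built from average pooling and nearest-neighbor upsampling. The real autoencoder is a trained neural network whose latents also change channel count (3 to 4); this sketch keeps 3 channels and only shows the spatial shapes:

```python
import numpy as np

def encode(image, factor=8):
    # Toy encoder: average each factor x factor patch into one latent value.
    c, h, w = image.shape
    patches = image.reshape(c, h // factor, factor, w // factor, factor)
    return patches.mean(axis=(2, 4))

def decode(latents, factor=8):
    # Toy decoder: nearest-neighbor upsampling back to pixel resolution.
    return latents.repeat(factor, axis=1).repeat(factor, axis=2)

image = np.random.default_rng(0).uniform(0, 1, size=(3, 512, 512))
latents = encode(image)
restored = decode(latents)
print(latents.shape, restored.shape)  # (3, 64, 64) (3, 512, 512)
```

Even this crude scheme shrinks the array the diffusion loop must process by a factor of 64, which is the source of the speedup the section describes.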


<!-- wp:image {"align":"center","id":70,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-17-1024x378.png" alt="" class="wp-image-70"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 21</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>Now the forward diffusion process is done on these compressed latents. The slices of noise are applied to those latents, not to the pixel image. So the noise predictor is actually trained to predict noise in the compressed representation (the latent space).</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":71,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-18-1024x301.png" alt="" class="wp-image-71"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 22</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>The forward process (using the autoencoder's encoder) is how we generate the data to train the noise predictor. Once it is trained, we can generate images by running the reverse process (using the autoencoder's decoder).</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":72,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-19-1024x578.png" alt="" class="wp-image-72"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 23</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>These two flows are what is shown in Figure 3 of the LDM/Stable Diffusion paper:</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":73,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-20-1024x508.png" alt="" class="wp-image-73"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 24</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>This figure also shows the "conditioning" component, which in this case is the text prompt describing what image the model should generate. So let's dig into the text component.</p>

<!-- /wp:paragraph -->


<!-- wp:heading -->

<h2><strong>The Text Encoder: A Transformer Language Model</strong></h2>

<!-- /wp:heading -->


<!-- wp:paragraph -->

<p>A Transformer language model is used as the language-understanding component that takes the text prompt and produces token embeddings. The released Stable Diffusion model uses ClipText (a GPT-based model), while the paper used BERT.</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>The Imagen paper showed that the choice of language model is an important one. Swapping in a larger language model affected generated image quality more than swapping in a larger image generation component did.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":74,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-21-1024x398.png" alt="" class="wp-image-74"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 25</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>The early Stable Diffusion models simply plugged in the pre-trained ClipText model released by OpenAI. Future models may switch to the newly released, larger OpenCLIP variants of CLIP (November 2022 update: Stable Diffusion V2 uses OpenCLIP). This new batch includes text models with up to 354M parameters, compared to the 63M parameters of ClipText.</p>

<!-- /wp:paragraph -->


<!-- wp:heading -->

<h2><strong>How Is CLIP Trained?</strong></h2>

<!-- /wp:heading -->


<!-- wp:paragraph -->

<p>CLIP is trained on a dataset of images and their captions. Imagine a dataset that looks like this, except with 400 million images and their captions:</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":75,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-22-1024x356.png" alt="" class="wp-image-75"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 26</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>In reality, CLIP was trained on images and their "alt" tags crawled from the web. CLIP is a combination of an image encoder and a text encoder. Its training process can be simplified to taking an image and its caption as a pair and encoding them with the image encoder and the text encoder respectively.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":76,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-23-1024x427.png" alt="" class="wp-image-76"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 27</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>We then compare the resulting embeddings using cosine similarity. When we begin the training process, the similarity will be low, even if the text describes the image correctly.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":77,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-24-1024x450.png" alt="" class="wp-image-77"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 28</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>We update the two models so that the next time we embed a pair, the resulting embeddings are more similar.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":78,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-25-1024x449.png" alt="" class="wp-image-78"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 29</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>By repeating this across the dataset with large batch sizes, we end up with encoders that can produce embeddings where an image of a dog and the sentence "a photo of a dog" are similar. Just as in word2vec, the training process also needs to include <strong>negative examples</strong> of images and captions that do not match, and the model needs to assign them low similarity scores.</p>

<!-- /wp:paragraph -->
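The matching/non-matching scoring can be sketched with cosine similarity over a small batch. The "encoders" here are random stand-ins; a trained CLIP would place matching pairs close together, which this sketch fakes by perturbing the image embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in embeddings for a batch of 3 image/caption pairs (768 dims).
image_embs = rng.standard_normal((3, 768))
# Fake "trained" text embeddings: each matching caption lands near its image.
text_embs = image_embs + 0.1 * rng.standard_normal((3, 768))

# Similarity matrix: diagonal = matching pairs, off-diagonal = negatives.
sims = np.array([[cosine_similarity(i, t) for t in text_embs]
                 for i in image_embs])
print(np.round(sims, 2))
```

Training pushes the diagonal entries toward 1 and the off-diagonal (negative) entries toward 0, which is exactly the contrastive objective the paragraph describes.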


<!-- wp:heading -->

<h2><strong>Feeding Text Information into the Image Generation Process</strong></h2>

<!-- /wp:heading -->


<!-- wp:paragraph -->

<p>To make text part of the image generation process, the noise predictor has to be adapted to take the text as an input.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":79,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-26-1024x447.png" alt="" class="wp-image-79"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 30</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>The dataset now includes the encoded text. Since we are operating in the latent space, both the input images and the predicted noise are in the latent space.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":80,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-27-1024x612.png" alt="" class="wp-image-80"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 31</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>To get a better sense of how the text tokens are used inside the UNet, let's take a deeper look at the UNet.</p>

<!-- /wp:paragraph -->


<!-- wp:heading -->

<h2><strong>Layers of the UNet Noise Predictor (Without Text)</strong></h2>

<!-- /wp:heading -->


<!-- wp:paragraph -->

<p>Let's first look at a diffusion UNet that does not use text. Its inputs and outputs look like this:</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":81,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-28-1024x315.png" alt="" class="wp-image-81"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 32</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>Inside, we can see that:</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>The UNet is a series of layers that work on transforming the latents array.</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>Each layer operates on the output of the previous layer.</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>Some of the outputs are fed (via residual connections) into the processing later in the network.</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>The timestep is transformed into a timestep embedding vector, and that is what gets used in the layers.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":82,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-29-1024x377.png" alt="" class="wp-image-82"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 33</strong></p>

<!-- /wp:paragraph -->
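The timestep-to-vector conversion mentioned above can be sketched with a sinusoidal encoding (as in the Transformer paper), which is one common choice; the exact scheme and dimension vary between implementations:

```python
import numpy as np

def timestep_embedding(t, dim=320):
    """Sinusoidal embedding of an integer timestep.

    A common choice for diffusion UNets; the dimension 320 here is
    illustrative, not a value taken from the article.
    """
    half = dim // 2
    freqs = np.exp(-np.log(10000.0) * np.arange(half) / half)
    angles = t * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])

emb = timestep_embedding(25)
print(emb.shape)  # (320,)
```

Each timestep gets a distinct vector, which lets the layers condition their behavior on how far along the denoising process is.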


<!-- wp:heading -->

<h2><strong>Layers of the UNet Noise Predictor (With Text)</strong></h2>

<!-- /wp:heading -->


<!-- wp:paragraph -->

<p>Let's now look at how to alter this system to include attention to the text.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":83,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-30-1024x316.png" alt="" class="wp-image-83"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 34</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>The main change we need in order to add support for text inputs (technical term: text conditioning) is to add an attention layer between the ResNet blocks.</p>

<!-- /wp:paragraph -->


<!-- wp:image {"align":"center","id":84,"sizeSlug":"large","linkDestination":"none"} -->

<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://172.24.140.82/wordpress/wp-content/uploads/2023/01/image-31-1024x462.png" alt="" class="wp-image-84"/></figure></div>

<!-- /wp:image -->


<!-- wp:paragraph {"align":"center"} -->

<p class="has-text-align-center"><strong>Figure 35</strong></p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>Note that the ResNet blocks do not look at the text directly. Instead, the attention layers merge those text representations into the latents. The next ResNet block can then utilize that incorporated text information in its processing.</p>

<!-- /wp:paragraph -->
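The merging step can be sketched as scaled dot-product cross-attention: queries come from the latent positions, keys and values from the text token embeddings. The projection matrices here are random stand-ins for what a real model learns, and the inner dimension 64 is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(latent_tokens, text_tokens, d=64):
    # Random stand-in projection matrices (a real model learns these).
    wq = rng.standard_normal((latent_tokens.shape[1], d))
    wk = rng.standard_normal((text_tokens.shape[1], d))
    wv = rng.standard_normal((text_tokens.shape[1], d))
    q = latent_tokens @ wq            # queries: from the image latents
    k = text_tokens @ wk              # keys: from the text
    v = text_tokens @ wv              # values: from the text
    weights = softmax(q @ k.T / np.sqrt(d))
    return weights @ v                # text information merged per latent position

latents = rng.standard_normal((64 * 64, 4))   # flattened 64x64 latent, 4 channels
text = rng.standard_normal((77, 768))         # 77 token embeddings
out = cross_attention(latents, text)
print(out.shape)  # (4096, 64)
```

Each latent position ends up with a mixture of the text token values, weighted by relevance, which is what lets the following ResNet block "see" the prompt.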


<!-- wp:heading -->

<h2><strong>Conclusion</strong></h2>

<!-- /wp:heading -->


<!-- wp:paragraph -->

<p>I hope this gives you a good first intuition of how Stable Diffusion works. Lots of other concepts are involved, but they become easier to understand once you are familiar with the building blocks above. The resources below are useful next steps.</p>

<!-- /wp:paragraph -->


<!-- wp:paragraph -->

<p>If you spot any errors or omissions, or have feedback or suggestions, please contact 963963 on 99U.</p>

<!-- /wp:paragraph -->


<!-- wp:heading -->

<h2><strong>References</strong></h2>

<!-- /wp:heading -->


<!-- wp:list {"ordered":true,"type":"1"} -->

<ol type="1"><li><a href="https://huggingface.co/blog/stable_diffusion">https://huggingface.co/blog/stable_diffusion</a></li><li><a href="https://huggingface.co/blog/annotated-diffusion">https://huggingface.co/blog/annotated-diffusion</a></li><li><a href="https://www.youtube.com/watch?v=J87hffSMB60">https://www.youtube.com/watch?v=J87hffSMB60</a></li><li><a href="https://www.youtube.com/watch?v=ltLNYA3lWAQ">https://www.youtube.com/watch?v=ltLNYA3lWAQ</a></li><li><a href="https://ommer-lab.com/research/latent-diffusion-models/">https://ommer-lab.com/research/latent-diffusion-models/</a></li><li><a href="https://lilianweng.github.io/posts/2021-07-11-diffusion-models/">https://lilianweng.github.io/posts/2021-07-11-diffusion-models/</a></li><li><a href="https://www.youtube.com/watch?v=_7rMfsA24Ls&amp;ab_channel=JeremyHoward">https://www.youtube.com/watch?v=_7rMfsA24Ls&amp;ab_channel=JeremyHoward</a></li></ol>

<!-- /wp:list -->

